Interestingness recognition is crucial for decision making in autonomous exploration with mobile robots. Previous work proposed an unsupervised online learning method that can adapt to environments and detect interesting scenes quickly, but it lacks the ability to adapt to human-informed objects. To solve this problem, we introduce a human-interactive framework, AirInteraction, that can detect human-informed objects via few-shot online learning. To reduce communication bandwidth, we first apply an online unsupervised learning algorithm on the unmanned vehicle to recognize interestingness, and then send only the potentially interesting scenes to a base station for human inspection. The human operator can draw and provide bounding box annotations for particular interesting objects, which are sent back to the robot so that it can detect similar objects via few-shot learning. Using only a few human-labeled examples, the robot can learn novel object categories during the mission and detect interesting scenes that contain those objects. We evaluate our method on various interesting-scene recognition datasets. To the best of our knowledge, this is the first human-informed few-shot object detection framework for autonomous exploration.
translated by 谷歌翻译
Few-shot object detection has rapidly progressed owing to the success of meta-learning strategies. However, the requirement of a fine-tuning stage in existing methods is time-consuming and significantly hinders their use in real-time applications, such as autonomous exploration with low-power robots. To solve this problem, we present a brand-new architecture, AirDet, which is free of fine-tuning by learning class-agnostic relations with support images. Specifically, we propose a support-guided cross-scale (SCS) feature fusion network to generate object proposals, a global-local relation network (GLR) for shots aggregation, and a relation-based prototype embedding network (R-PE) for precise localization. Exhaustive experiments on the COCO and PASCAL VOC datasets show that AirDet achieves comparable or even better results than the exhaustively fine-tuned methods, reaching up to 40-60% improvement over the baselines. To our excitement, AirDet obtains favorable performance on multi-scale objects, especially small ones. Furthermore, we present evaluation results from real-world exploration tests in the DARPA Subterranean Challenge, which strongly validate the feasibility of AirDet in robotics. The source code, pre-trained models, and the real-world exploration data will be made public.
Autonomous robots frequently need to detect "interesting" scenes to decide on further exploration, or to decide which data to share for cooperation. These scenarios often require fast deployment with little or no training data. Prior work considers "interestingness" based on data from the same distribution. Instead, we propose a method that automatically adapts online to the environment in order to report interesting scenes quickly. To solve this problem, we develop a novel translation-invariant visual memory and design a three-stage architecture for long-term, short-term, and online learning, which enables the system to learn human-like experience, environmental knowledge, and online adaptation, respectively. With this system, we achieve an accuracy that is, on average, 20% higher than the state-of-the-art unsupervised methods in a subterranean tunnel environment. We also show performance comparable to supervised methods in robot exploration scenarios, demonstrating the efficacy of our approach. We expect the presented method to play an important role in robotic interestingness-recognition tasks for exploration.
In semiconductor manufacturing, wafer map defect patterns provide critical information for facility maintenance and yield management, so the classification of defect patterns is one of the most important tasks in the manufacturing process. In this paper, we propose a novel way to represent the shape of a defect pattern as a finite-dimensional vector, which is then used as the input to a neural network classifier. The main idea is to extract the topological features of each pattern using the theory of persistent homology from topological data analysis (TDA). Through experiments on a simulated dataset, we show that the proposed method is faster and more efficient to train than approaches based on convolutional neural networks (CNNs), the most common approach to wafer map defect pattern classification. Moreover, our method outperforms the CNN-based method when the training data are insufficient and imbalanced.
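As a concrete illustration of the underlying idea (not the paper's pipeline, which in practice would use a TDA library such as GUDHI or giotto-tda), 0-dimensional sublevel-set persistence of a toy "wafer map" grid can be computed with a union-find sweep; the resulting birth/death pairs are what gets summarized into a fixed-length feature vector:

```python
def persistence_0d(grid):
    """Birth/death pairs of connected components in a sublevel-set
    filtration of a 2D scalar field, via a union-find sweep."""
    h, w = len(grid), len(grid[0])
    cells = sorted((grid[r][c], r, c) for r in range(h) for c in range(w))
    parent, birth, pairs = {}, {}, []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for val, r, c in cells:
        parent[(r, c)] = (r, c)
        birth[(r, c)] = val
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if nb in parent:
                ra, rb = find((r, c)), find(nb)
                if ra != rb:
                    if birth[ra] > birth[rb]:  # elder rule: younger component dies
                        ra, rb = rb, ra
                    pairs.append((birth[rb], val))
                    parent[rb] = ra
    for root in {find(p) for p in parent}:  # surviving components never die
        pairs.append((birth[root], float("inf")))
    return pairs
```

A fixed-length vector can then be formed from statistics of the diagram, e.g. the total finite persistence and the count of long-lived features, rather than feeding raw pixels to a CNN.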
The 3D-aware image synthesis focuses on conserving spatial consistency besides generating high-resolution images with fine details. Recently, Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate a generative NeRF and show remarkable achievement, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features to the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated with three image datasets, AFHQ, CelebA, and Cars. As a result, our model shows strong 3D-consistency with fine details and smooth interpolation in conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis with a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated 3D-aware images of each class of the datasets as it is possible to synthesize class-conditional images with $\text{C}^{3}$G-NeRF.
In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique to the corkwing wrasse, Symphodus melops, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings might range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88, on our dataset.
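A minimal sketch of the contrastive objective behind such a model (the paper's pipeline builds on Inception v3 with a projection head; the toy vectors, temperature value, and function names here are illustrative assumptions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def nt_xent(anchor, positive, negatives, tau=0.5):
    """InfoNCE/NT-Xent loss for one anchor: negative log softmax
    probability of the positive among all candidates, at temperature tau."""
    logits = ([cosine(anchor, positive) / tau] +
              [cosine(anchor, n) / tau for n in negatives])
    m = max(logits)  # subtract max for numerical stability
    return -(logits[0] - m - math.log(sum(math.exp(l - m) for l in logits)))
```

Once trained, re-identification reduces to nearest-neighbor lookup: a query photo is matched to the gallery individual whose embedding has the highest cosine similarity, which is what the one-shot and 5-shot accuracies measure.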
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
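The greedy conditional-mutual-information policy described above can be written down directly for a small discrete distribution, where the conditioning is done by counting (the paper amortizes this oracle policy with a learned network; the toy dataset shape and function names below are assumptions for illustration):

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(X; Y) in nats, estimated by counting over (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def greedy_select(data, instance, budget):
    """Greedy dynamic feature selection: repeatedly query the feature with
    the highest mutual information with the label, conditioned on the
    feature values observed so far.
    data: list of (feature_tuple, label); instance: the example queried."""
    selected = {}
    for _ in range(budget):
        # keep only samples consistent with the features observed so far
        consistent = [(f, y) for f, y in data
                      if all(f[i] == v for i, v in selected.items())]
        best, best_mi = None, -1.0
        for i in range(len(instance)):
            if i not in selected:
                mi = mutual_info([(f[i], y) for f, y in consistent])
                if mi > best_mi:
                    best, best_mi = i, mi
        selected[best] = instance[best]  # observe the chosen feature's value
    return list(selected)
```

The exponential cost of the conditioning step is exactly why the method needs oracle access to the distribution, and why the paper replaces it with an amortized network trained to match this greedy policy.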
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased for increasing proximal section angle for all testing conditions, with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay, and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
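A toy simulation can illustrate why the two compensations map onto the two error sources: a phase lead counters the time delay introduced by hysteresis, and an offset subtraction counters the DC offset induced by the proximal section angle (the plant model, gains, and numbers below are illustrative assumptions, not the paper's hardware):

```python
import math

def rms_error(desired, actual):
    """Root-mean-square tracking error over two equal-length sequences."""
    return math.sqrt(sum((d - a) ** 2 for d, a in zip(desired, actual))
                     / len(desired))

def simulate(lead=0.0, offset_comp=0.0, lag=0.3, offset=5.0, n=200):
    """Toy plant: bending angle = 30*sin(command phase - lag) + offset (deg).
    `lead` advances the commanded phase to counter the hysteresis delay;
    `offset_comp` subtracts the proximal-section-induced DC offset."""
    desired, actual = [], []
    for k in range(n):
        th = 2 * math.pi * k / n
        desired.append(30 * math.sin(th))
        actual.append(30 * math.sin((th + lead) - lag) + offset - offset_comp)
    return rms_error(desired, actual)
```

In this toy model, applying both compensations drives the tracking error to essentially zero, mirroring the paper's finding that combined re-tension + hysteresis compensation gave the largest error reduction.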
With the rapid development of drone technologies, drones are widely used in many applications, including military domains. In this paper, a novel situation-aware DRL-based autonomous nonlinear drone mobility control algorithm is proposed for cyber-physical loitering munition applications. On the battlefield, designing a DRL-based autonomous control algorithm is not straightforward because real-world data gathering is generally not available. Therefore, the approach in this paper is to construct a cyber-physical virtual environment in Unity. Based on the virtual cyber-physical battlefield scenarios, a DRL-based automated nonlinear drone mobility control algorithm can be designed, evaluated, and visualized. Moreover, many obstacles exist that are harmful for linear trajectory control in real-world battlefield scenarios. Thus, our proposed autonomous nonlinear drone mobility control algorithm utilizes situation-aware components that are implemented with a Raycast function in the Unity virtual scenarios. Based on the gathered situation-aware information, the drone can autonomously and nonlinearly adjust its trajectory during flight. This approach is clearly beneficial for avoiding obstacles in obstacle-deployed battlefields. Our visualization-based performance evaluation shows that the proposed algorithm is superior to other linear mobility control algorithms.
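The situation-aware raycasting idea can be sketched outside Unity as a simple 2D geometric check (Unity's own `Physics.Raycast` is a C# API; the Python functions, ray fan, and cost rule below are illustrative assumptions, not the paper's implementation):

```python
import math

def ray_hits_circle(origin, angle, center, radius, max_dist):
    """Return the along-ray distance to a circular obstacle, or None if clear."""
    ox, oy = origin
    cx, cy = center
    dx, dy = math.cos(angle), math.sin(angle)
    t = (cx - ox) * dx + (cy - oy) * dy  # project center onto the ray
    if t < 0 or t > max_dist:
        return None
    px, py = ox + t * dx, oy + t * dy    # closest point on the ray
    return t if math.hypot(px - cx, py - cy) <= radius else None

def best_heading(origin, goal_angle, obstacles,
                 fov=math.pi / 2, n_rays=9, max_dist=10.0):
    """Cast a fan of rays and pick the clear heading closest to the goal."""
    best, best_cost = None, float("inf")
    for k in range(n_rays):
        a = goal_angle - fov / 2 + fov * k / (n_rays - 1)
        hit = any(ray_hits_circle(origin, a, c, r, max_dist) is not None
                  for c, r in obstacles)
        cost = abs(a - goal_angle) + (1e6 if hit else 0.0)
        if cost < best_cost:
            best, best_cost = a, cost
    return best
```

Recomputing this choice every control step yields the nonlinear, obstacle-avoiding trajectory adjustment the abstract describes; in the paper this sensing is done by Unity raycasts and the steering policy is learned with DRL rather than hand-coded.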
In robotics and computer vision communities, extensive studies have been conducted on surveillance tasks, including human detection, tracking, and motion recognition with a camera. Additionally, deep learning algorithms are widely utilized in the aforementioned tasks, as in other computer vision tasks. Existing public datasets are insufficient to develop learning-based methods that handle various surveillance tasks in outdoor and extreme situations, such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person view data annotated by well-trained annotators. Moreover, a single pair contains multi-modal data (e.g. an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and present methods of exploiting our dataset with deep learning-based algorithms. The latest information on the dataset and our study are available at https://github.com/lge-robot-navi, and the dataset will be available for download through a server.